142 research outputs found

    Kernel Ellipsoidal Trimming

    No full text
    Ellipsoid estimation is an issue of primary importance in many practical areas such as control, system identification, visual/audio tracking, experimental design, data mining, robust statistics and novelty/outlier detection. This paper presents a new method of kernel information matrix ellipsoid estimation (KIMEE) that finds an ellipsoid in a kernel-defined feature space based on a centered information matrix. Although the method is very general and can be applied to many of the aforementioned problems, the main focus in this paper is the problem of novelty or outlier detection associated with fault detection. A simple iterative algorithm based on Titterington's minimum volume ellipsoid method is proposed for practical implementation. The KIMEE method demonstrates very good performance on a set of real-life and simulated datasets compared with support vector machine methods.
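
    The abstract does not reproduce the KIMEE algorithm itself, but the multiplicative minimum-volume-ellipsoid iteration it builds on (Titterington's method) is simple to sketch. The following is a minimal numpy illustration for plain (non-kernelised) data; the lifting step, function name and tolerance settings are assumptions for illustration, not the paper's implementation.

    import numpy as np

    def min_volume_ellipsoid(X, tol=1e-6, max_iter=2000):
        # Multiplicative iteration (in the spirit of Titterington's method) for
        # the minimum-volume ellipsoid enclosing the rows of X.  Returns a centre
        # c and shape matrix A with (x - c)^T A (x - c) <= 1 for all rows x.
        n, p = X.shape
        Z = np.hstack([X, np.ones((n, 1))])       # lift so the centre is handled too
        w = np.full(n, 1.0 / n)                   # uniform starting weights
        for _ in range(max_iter):
            M = Z.T @ (w[:, None] * Z)            # weighted information matrix
            d = np.einsum("ij,jk,ik->i", Z, np.linalg.inv(M), Z)
            if d.max() <= (p + 1) * (1.0 + tol):  # equivalence-theorem stopping rule
                break
            w = w * d / (p + 1)                   # multiplicative weight update
        c = w @ X                                 # ellipsoid centre
        S = X.T @ (w[:, None] * X) - np.outer(c, c)
        A = np.linalg.inv(S) / p                  # shape matrix of the ellipsoid
        return c, A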

    Hierarchical Gaussian process mixtures for regression

    Get PDF
    As a result of their good performance in practice and their desirable analytical properties, Gaussian process regression models are becoming increasingly of interest in statistics, engineering and other fields. However, two major problems arise when the model is applied to a large data-set with repeated measurements. One stems from the systematic heterogeneity among the different replications, and the other is the requirement to invert a covariance matrix that is involved in the implementation of the model. The dimension of this matrix equals the sample size of the training data-set. In this paper, a Gaussian process mixture model for regression is proposed to deal with these two problems, and a hybrid Markov chain Monte Carlo (MCMC) algorithm is used for its implementation. Application to a real data-set is reported.
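
    To make the second problem concrete: standard GP regression factorises an n x n covariance matrix, where n is the training-set size. The sketch below shows plain (non-mixture, non-hierarchical) GP prediction with a squared-exponential kernel; the kernel choice, noise level and function names are illustrative assumptions, not the paper's model.

    import numpy as np

    def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
        # Squared-exponential covariance between two sets of inputs (rows).
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return variance * np.exp(-0.5 * sq / length_scale**2)

    def gp_predict(X_train, y_train, X_test, noise=1e-2, **kern):
        # Posterior mean/variance of a zero-mean GP regression model.  The
        # Cholesky factorisation below is of an n x n matrix, where n is the
        # training-set size -- the cost the mixture model is meant to reduce
        # by splitting the data across components.
        K = rbf_kernel(X_train, X_train, **kern) + noise * np.eye(len(X_train))
        L = np.linalg.cholesky(K)                      # O(n^3) step
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
        K_s = rbf_kernel(X_train, X_test, **kern)
        mean = K_s.T @ alpha
        v = np.linalg.solve(L, K_s)
        var = rbf_kernel(X_test, X_test, **kern).diagonal() - np.sum(v**2, 0)
        return mean, var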

    Batch classifications with discrete finite mixtures

    Full text link

    D-optimal designs via a cocktail algorithm

    Get PDF
    A fast new algorithm is proposed for numerical computation of (approximate) D-optimal designs. This "cocktail algorithm" extends the well-known vertex direction method (VDM; Fedorov, 1972) and the multiplicative algorithm (Silvey, Titterington and Torsney, 1978), and shares their simplicity and monotonic convergence properties. Numerical examples show that the cocktail algorithm can lead to dramatically improved speed, sometimes by orders of magnitude, relative to either the multiplicative algorithm or the vertex exchange method (a variant of VDM). Key to the improved speed is a new nearest neighbor exchange strategy, which acts locally and complements the global effect of the multiplicative algorithm. Possible extensions to related problems such as nonparametric maximum likelihood estimation are mentioned.
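
    For orientation, two of the ingredients can be sketched over a finite candidate set: a vertex-direction (Fedorov-Wynn) step followed by a multiplicative update in each pass. The nearest neighbor exchange that gives the cocktail algorithm its speed is deliberately omitted, and the step sizes and iteration count are simplified assumptions rather than the paper's implementation.

    import numpy as np

    def d_optimal_design(X, n_iter=200):
        # Approximate D-optimal design weights on the candidate points in the
        # rows of X (n candidates, p parameters).  Weights sum to one.
        n, p = X.shape
        w = np.full(n, 1.0 / n)                        # uniform initial design
        for _ in range(n_iter):
            Minv = np.linalg.inv(X.T @ (w[:, None] * X))
            d = np.einsum("ij,jk,ik->i", X, Minv, X)   # variance function d(x, w)
            # vertex-direction step towards the candidate with largest variance
            j = int(np.argmax(d))
            alpha = (d[j] - p) / (p * (d[j] - 1.0))
            w = (1.0 - alpha) * w
            w[j] += alpha
            # multiplicative step (Silvey, Titterington and Torsney, 1978)
            Minv = np.linalg.inv(X.T @ (w[:, None] * X))
            d = np.einsum("ij,jk,ik->i", X, Minv, X)
            w *= d / p                                 # preserves sum(w) == 1
        return w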

    A spatial interaction model for deriving joint space maps of bundle compositions and market segments from pick-any/J data: An application to new product options

    Full text link
    We propose an approach for deriving joint space maps of bundle compositions and market segments from three-way (e.g., consumers x product options/benefits/features x usage situations/scenarios/time periods) pick-any/J data. The proposed latent structure multidimensional scaling procedure simultaneously extracts market segment and product option positions in a joint space map such that the closer a product option is to a particular segment, the higher the likelihood of its being chosen by that segment. A segment-level threshold parameter is estimated that spatially delineates the bundle of product options predicted to be chosen by each segment. Estimates of the probability of each consumer belonging to the derived segments are obtained simultaneously. Explicit treatment of product and consumer characteristics is allowed via optional model reparameterizations of the product option locations and segment memberships. We illustrate the use of the proposed approach with an actual commercial application involving pick-any/J data gathered by a major hi-tech firm for some 23 advanced technological options for new automobiles.
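
    As a purely hypothetical illustration of the spatial choice rule described above (an option is more likely to be picked the closer it lies to a segment, gated by a segment-level threshold), one might write something like the following; the logistic link and the parameterisation are assumptions, since the abstract does not specify the likelihood.

    import numpy as np

    def pick_probability(option_xy, segment_xy, threshold):
        # Hypothetical sketch: probability that a segment picks a product option,
        # decreasing in the option-to-segment distance in the joint space map and
        # gated by the segment-level threshold.  Not the paper's likelihood.
        dist = np.linalg.norm(np.asarray(option_xy) - np.asarray(segment_xy))
        return 1.0 / (1.0 + np.exp(dist - threshold))

    Under this assumed link, options whose distance to a segment falls below that segment's threshold get a pick probability above one half, mimicking the spatial delineation of the predicted bundle.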

    Some Aspects of Latent Structure Analysis

    Get PDF
    Latent structure models involve real, potentially observable variables and latent, unobservable variables. The framework includes various particular types of model, such as factor analysis, latent class analysis, latent trait analysis, latent profile models, mixtures of factor analysers, state-space models and others. The simplest scenario, of a single discrete latent variable, includes finite mixture models, hidden Markov chain models and hidden Markov random field models. The paper gives a brief tutorial on the application of maximum likelihood and Bayesian approaches to the estimation of parameters within these models, emphasising that computational complexity varies greatly among the different scenarios. In the case of a single discrete latent variable, the issue of assessing its cardinality is discussed. Techniques such as the EM algorithm, Markov chain Monte Carlo methods and variational approximations are mentioned.
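
    The simplest single-discrete-latent-variable case mentioned above is the finite mixture model, for which the EM algorithm is the standard maximum likelihood tool. A minimal one-dimensional Gaussian mixture sketch follows; the initialisation and fixed iteration count are simplified assumptions for illustration.

    import numpy as np

    def em_gaussian_mixture(x, k, n_iter=100):
        # EM for a one-dimensional Gaussian mixture: the component label is the
        # single discrete latent variable.
        x = np.asarray(x, dtype=float)
        n = len(x)
        rng = np.random.default_rng(0)
        pi = np.full(k, 1.0 / k)                     # mixing proportions
        mu = rng.choice(x, k, replace=False)         # component means
        var = np.full(k, x.var())                    # component variances
        for _ in range(n_iter):
            # E-step: posterior probability of each component for each point
            dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
            resp = pi * dens
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: re-estimate parameters from the responsibilities
            nk = resp.sum(axis=0)
            pi = nk / n
            mu = (resp * x[:, None]).sum(axis=0) / nk
            var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        return pi, mu, var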